
Journal of the Association for Research in Otolaryngology

Springer Science and Business Media LLC

Preprints posted in the last 30 days, ranked by how well they match Journal of the Association for Research in Otolaryngology's content profile, based on 11 papers previously published here. The average preprint has a 0.01% match score for this journal, so anything above that is already an above-average fit.

1
Improving Automated Diagnosis of Middle and Inner Ear Pathologies by Estimating Middle Ear Input Impedance from Wideband Tympanometry

Kamau, A. F.; Merchant, G. R.; Nakajima, H. H.; Neely, S. T.

2026-03-31 otolaryngology 10.64898/2026.03.26.26349034 medRxiv
Top 0.1%
8.5%

Conductive hearing loss (CHL) with a normal otoscopic exam can be difficult to diagnose because routine clinical measures such as audiometric air-bone gaps (ABGs) can identify a conductive component but often cannot distinguish among specific underlying mechanical pathologies (e.g., stapes fixation versus superior canal dehiscence, which may produce similar audiograms). Wideband tympanometry (WBT) is a fast, noninvasive test that can provide additional mechanical information across a broad range of frequencies (200 Hz to 8 kHz). However, WBT metrics are influenced by variations in ear canal geometry and probe placement and can be challenging to interpret clinically. In this study, we extend prior WBT absorbance-based classification work by estimating the middle ear input impedance at the tympanic membrane (ZME), a WBT-derived metric intended to reduce ear canal effects. To estimate ZME, we fit an analog circuit model of the ear canal, middle ear, and inner ear to raw WBT data collected at tympanometric peak pressure (TPP). Data from 27 normal ears, 32 ears with superior canal dehiscence, and 38 ears with stapes fixation were analyzed. A multinomial logistic regression classifier was trained using principal component analysis (retaining 90% variance) and stratified 5-fold cross-validation with regularization. We compared feature sets based on ABGs alone, ABGs combined with absorbance, and ABGs combined with the magnitude of ZME. The combination of ABGs and the magnitude of ZME produced the best performance, achieving an overall accuracy of 85.6% compared to 80.4% for ABGs alone and 78.4% for ABGs combined with absorbance. These results suggest that incorporating model-derived middle ear impedance features with standard audiometric measures (ABGs) can improve automated pathology classification for stapes fixation and superior canal dehiscence.
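The classification pipeline this abstract describes (PCA retaining 90% of variance feeding a regularized multinomial logistic regression, scored with stratified 5-fold cross-validation) can be sketched with scikit-learn. The data below are synthetic stand-ins with assumed dimensions, not the study's measurements:

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import StratifiedKFold, cross_val_score

rng = np.random.default_rng(0)
# Synthetic stand-in: 97 ears (27 normal, 32 SCD, 38 stapes fixation),
# each with a feature vector standing in for ABGs plus |ZME| across frequency.
X = rng.normal(size=(97, 40))
y = np.repeat([0, 1, 2], [27, 32, 38])
X += y[:, None] * 0.5  # give the classes some artificial separation

# PCA keeping 90% of variance, then L2-regularized multinomial regression,
# scored by stratified 5-fold cross-validation as in the abstract.
clf = make_pipeline(StandardScaler(),
                    PCA(n_components=0.90),
                    LogisticRegression(C=1.0, max_iter=1000))
scores = cross_val_score(clf, X, y,
                         cv=StratifiedKFold(5, shuffle=True, random_state=0))
print(round(scores.mean(), 2))
```

With real features, the same pipeline would be fit once per feature set (ABGs alone, ABGs + absorbance, ABGs + |ZME|) and the cross-validated accuracies compared.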

2
Discrimination of spectrally sparse complex-tone triads in cochlear implant listeners

Augsten, M.-L.; Lindenbeck, M. J.; Laback, B.

2026-03-24 neuroscience 10.64898/2026.03.20.712905 medRxiv
Top 0.1%
6.8%

Cochlear implant (CI) users typically experience difficulties perceiving musical harmony due to a restricted spectro-temporal resolution at the electrode-nerve interface, resulting in limited pitch perception. We investigated how stimulus parameters affect discrimination of complex-tone triads (three-voice chords), aiming to identify conditions that maximize perceptual sensitivity. Six post-lingually deafened CI listeners completed a same/different task with harmonic complex tones, while spectral complexity, voice(s) containing a pitch change, and temporal synchrony (simultaneous vs. sequential triad presentation) were manipulated. CI listeners discriminated harmonically relevant one-semitone pitch changes within triads when spectral complexity was reduced to three or five components per voice, with significantly better performance for three-component compared to nine-component tones. Sensitivity was observed for pitch changes in the high voice or in both high and low voices, but not for changes in only the low voice. Single-voice sensitivity predicted simultaneous-triad sensitivity when controlling for spectral complexity and voice with pitch change. Contrary to expectations, sequential triad presentation did not improve discrimination. An analysis of processor pulse patterns suggests that difference-frequency cues encoded in the temporal envelope rather than place-of-excitation cues underlie perceptual triad sensitivity. These findings support reducing spectral complexity to enhance chord discrimination for CI users based on temporal cues.

3
Modeling the Influence of Bandwidth and Envelope on Categorical Loudness Scaling

Neely, S. T.; Harris, S. E.; Hajicek, J. J.; Petersen, E. A.; Shen, Y.

2026-04-01 neuroscience 10.64898/2026.03.30.715393 medRxiv
Top 0.1%
6.7%

In a loudness-matching paradigm, a reduction in the loudness of sounds with bandwidths less than one-half octave compared to a tone of equal sound pressure level has been observed previously for five-tone complexes at 60 dB SPL centered at 1 kHz. Here, this loudness-reduction phenomenon is explored using band-limited noise across wide ranges of frequency and level. Additionally, these measurements are simulated by a model of loudness judgement based on neural ensemble averaging (NEA), which serves as a proxy for central auditory signal processing. Multi-frequency equal-loudness contours (ELC) were measured for each of the adult participants (N=100) with pure-tone average (PTA) thresholds that ranged from normal to moderate hearing loss using a categorical-loudness-scaling (CLS) paradigm. Presentation level and center frequency of the test stimuli were determined on each trial according to a Bayesian adaptive algorithm, which enabled multi-frequency ELC estimation within about five minutes of testing. Three separate test conditions differed by stimulus type: (1) pure-tone, (2) quarter-octave noise and (3) octave noise. For comparison, loudness judgements for all three stimulus types were also simulated by the NEA model, which comprised a nonlinear, active, time-domain cochlear model with an appended stage of neural spike generation. Mid-bandwidth loudness reduction was observed to be greatest at moderate stimulus levels and frequencies near 1 kHz. This feature was approximated by the NEA model, which suggests involvement of an early stage of the central auditory system in the formation of loudness judgements.

4
Hearing sounds when the eyes move: A case study implicating the tensor tympani in eye movement-related peripheral auditory activity

King, C. D.; Zhu, T.; Groh, J. M.

2026-03-25 neuroscience 10.64898/2026.03.24.713974 medRxiv
Top 0.1%
4.8%

Information about eye movements is necessary for linking auditory and visual information across space. Recent work has suggested that such signals are incorporated into processing at the level of the ear itself (Gruters, Murphy et al. 2018). Here we report confirmation that the eye movement signals that reach the ear can produce perceptual consequences, via a case report of an unusual participant with tensor tympani myoclonus who hears sounds when she moves her eyes. The sounds she hears could be recorded with a microphone in the ear in which she hears them (left), and occurred for large leftward eye movements to extreme orbital positions of the eyes. The sounds elicited by this participant's eye movements were reminiscent of eye movement-related eardrum oscillations (EMREOs; Gruters, Murphy et al. 2018, Brohl and Kayser 2023, King, Lovich et al. 2023, Lovich, King et al. 2023, Lovich, King et al. 2023, Abbasi, King et al. 2025, Sotero Silva, Kayser et al. 2025, King and Groh 2026, Leon, Ramos et al. 2026, Sotero Silva, Brohl et al. 2026), but were larger and longer lasting than classical EMREOs, helping to explain why they were audible to her. Overall, the observations from this patient help establish that (a) eye movement-related signals specifically reach the tensor tympani muscle and that (b) when there is an abnormality involving that muscle, such signals can lead to actual audible percepts. Given that the tensor tympani contributes to the regulation of sound transmission in the middle ear, these findings support the conclusion that eye movement signals reaching the ear have functional consequences for auditory perception. The findings also expand the types of medical conditions that produce gaze-evoked tinnitus, to date most commonly observed in connection with acoustic neuromas.

5
Vestibular Function Loss Associates With Sensory Epithelium Pathology In Vestibular Schwannoma Patients

Borrajo, M.; Callejo, A.; Castellanos, E.; Amilibia, E.; Llorens, J.

2026-03-25 neuroscience 10.64898/2026.03.23.713132 medRxiv
Top 0.1%
4.4%

Vestibular schwannomas (VS) cause vestibular function loss by mechanisms that remain poorly understood. We evaluated the vestibulo-ocular reflex by the video-assisted Head Impulse Test (vHIT) in patients with planned tumour resection by a trans-labyrinthine approach. The vestibular sensory epithelia were collected and processed by immunofluorescent labelling for confocal microscopy analysis of sensory hair cell subtypes (type I, HCI, and type II, HCII), calyx endings of the pure-calyx afferents, and the calyceal junction normally found between HCI and the calyx (n=23). Comparing Normofunction and Hypofunction patients, we concluded that worse vestibular function associates with decreased HCI and HCII counts in the sensory epithelia and with an increased proportion of damaged calyces. A decrease in the number of HCI and calyx endings of the pure-calyx afferents was also found to be associated with increasing age. Partial least squares regression (PLSR) models indicated that VS and age had independent, additive effects on vestibular function. Correlation analyses indicated that lower vHIT gains associate with lower numbers of HCI and increased percentages of damaged calyces. These data support the hypothesis that the deleterious effect of VS on vestibular function is mediated, at least in part, by its damaging impact on the vestibular sensory epithelium. They also provide further evidence for the dependency of the vestibulo-ocular reflex on HCI function and for calyceal junction pathology as a common response of the sensory epithelium to HC stress.

6
Deficits in tail-lift and air-righting reflexes in rats after ototoxicity associate with loss of vestibular type I hair cells

Palou, A.; Tagliabue, M.; Beraneck, M.; Llorens, J.

2026-03-26 neuroscience 10.64898/2026.03.24.712950 medRxiv
Top 0.1%
3.8%

The rat vestibular system plays a critical role in anti-gravity responses such as the tail-lift reflex and the air-righting reflex. In a previous study in male rats, we obtained evidence that these two reflexes depend on the function of non-identical populations of vestibular sensory hair cells (HC). Here, we caused graded lesions in the vestibular system of female rats by exposing the animals to several different doses of an ototoxic chemical, 3,3'-iminodipropionitrile (IDPN). After exposure, we assessed the anti-gravity responses of the rats and then assessed the loss of type I HC (HCI) and type II HC (HCII) in the central and peripheral regions of the crista, utricle and saccule. As expected, we recorded a dose-dependent loss of vestibular function and loss of HCs. The relationship between hair cell loss and functional loss was examined using non-linear models fitted by orthogonal distance regression. The results indicated that both the tail-lift and air-righting reflexes mostly depend on HCI function. However, a different dependency was found on the epithelium triggering the reflex: while the tail-lift response is sensitive to loss of crista and/or utricle HCIs, the air-righting response depends rather on utricular and/or saccular integrity.
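Orthogonal distance regression, used in this abstract to fit non-linear dose-response models, accounts for measurement error in both variables rather than the response alone. A minimal sketch with scipy.odr on synthetic data; the sigmoid form, variable names, and noise levels are assumptions for illustration, not the study's model:

```python
import numpy as np
from scipy.odr import ODR, Model, RealData

def sigmoid(beta, x):
    """Illustrative non-linear model: reflex score vs. hair cell survival."""
    a, b, c = beta
    return a / (1.0 + np.exp(-b * (x - c)))

rng = np.random.default_rng(3)
x = np.linspace(0.0, 100.0, 25)                    # % surviving HCI (synthetic)
y = sigmoid([1.0, 0.1, 50.0], x) + 0.05 * rng.normal(size=x.size)
x_obs = x + rng.normal(scale=2.0, size=x.size)     # both axes carry error

# ODR minimizes orthogonal distances, weighting errors on x and y.
fit = ODR(RealData(x_obs, y, sx=2.0, sy=0.05),
          Model(sigmoid), beta0=[1.0, 0.1, 50.0]).run()
print(np.round(fit.beta, 2))
```

Ordinary least squares would attribute all scatter to y; ODR is the appropriate choice here because hair cell counts are themselves estimates with sampling error.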

7
Speech-in-Noise Difficulties in Aminoglycoside Ototoxicity Reflect Combined Afferent and Efferent Dysfunction

Motlagh Zadeh, L.; Izhiman, D.; Blankenship, C. M.; Moore, D. R.; Martin, D. K.; Garinis, A.; Feeney, P.; Hunter, L. R.

2026-03-26 otolaryngology 10.64898/2026.03.23.26348719 medRxiv
Top 0.1%
3.7%

Objectives: Patients with cystic fibrosis (CF) often receive aminoglycosides (AGs) to manage recurrent pulmonary infections, placing them at risk for ototoxicity. Chronic AG use can lead to complex cochlear damage affecting inner and outer hair cells, the stria vascularis, and spiral ganglion neurons. The greatest damage is typically in the basal cochlear region, which encodes high-frequency hearing, with additional involvement of more apical regions. While extended-high-frequency (EHF) hearing loss (EHFHL; 9-16 kHz) is often the earliest sign of AG ototoxicity, speech-in-noise (SiN) effects are rarely studied. Our overall hypothesis is that SiN perception difficulties in individuals with CF treated with AGs are related to combined cochlear and neural damage, primarily in the EHF range but also in the standard-frequency (SF; 0.25-8 kHz) range. Three mechanisms that contribute to SiN perception were evaluated in children and young adults: 1) a primary effect of reduced EHF sensitivity, measured by pure-tone audiometry (PTA) and transient-evoked otoacoustic emissions (TEOAEs); 2) a secondary effect of subclinical damage in the SF range, measured by PTA and TEOAEs; and 3) additional neural effects, measured by middle ear muscle reflex (MEMR) threshold (afferent) and growth functions (efferent). Design: A total of 185 participants were enrolled: 101 individuals with CF treated with intravenous AGs and 84 age- and sex-matched Controls without hearing concerns or CF. Assessments included EHF and SF PTA; the Bamford-Kowal-Bench (BKB)-SIN test for SiN perception; double-evoked TEOAEs with chirp stimuli from 0.71 to 14.7 kHz; and ipsilateral and contralateral wideband MEMR thresholds and growth functions using broadband stimuli. Results: Reduced sensitivity at EHFs (PTA, TEOAEs) was not associated with impaired SiN perception in the CF group. SF hearing, regardless of EHF status, was the primary predictor of SiN performance in the CF group. Increased MEMR growth was also significantly associated with poorer SiN in the CF group. Conclusions: In CF, impaired SiN perception was primarily predicted by SF hearing impairment, with additional involvement of the efferent auditory pathway through increased MEMR growth. These results build on prior evidence for efferent neural effects due to ototoxic exposures, supporting both sensory (afferent) and neural (efferent) mechanisms that contribute to listening difficulties in CF. Thus, preventive and intervention strategies should consider these combined mechanisms in people with AG ototoxicity to address their SiN problems.

8
Saccade-related sound pulses and phase-resetting contribute to eye movement-related eardrum oscillations (EMREOs)

King, C. D.; Groh, J. M.

2026-03-27 neuroscience 10.64898/2026.03.25.714060 medRxiv
Top 0.1%
1.5%

Eye movement-related eardrum oscillations (EMREOs) appear to consist of a pulse of oscillation occurring in conjunction with saccades. However, this apparent pulse could occur either because there is an increase in energy at that frequency at the time of saccades (a true pulse), or because there is saccade-related phase resetting of ongoing energy in that frequency band, thus appearing like a pulse when averaged in the time domain across many trials. Here we conducted a spectral analysis at the individual-trial level in humans performing a visually guided saccade task to determine whether the power in the EMREO frequency band (30-45 Hz) is higher during saccades than during steady fixation. We found both an increase in sound power in the EMREO frequency band associated with saccades, i.e., sound pulses at the individual-trial level, as well as phase resetting at saccade onset/offset. While both factors contribute to the apparently pulse-like EMREO signal, phase resetting appears to be more prevalent across participants. The prevalence of phase resetting has implications for the underlying mechanism(s) producing EMREOs as well as functional consequences for how the ear might respond to incoming sound in an eye-position-dependent fashion.

9
Association of Otolithic Integrity With Subjective and Functional Outcomes in Vestibular Rehabilitation: A Pilot Study

Cortes, Y. H.; Ramos Maldonado, D.; Romo, V. S.; Annel, G.-C.; Leyva, I. C.

2026-04-03 rehabilitation medicine and physical therapy 10.64898/2026.04.01.26349994 medRxiv
Top 0.1%
1.0%

Variable recovery in vestibular rehabilitation underscores the need for objective biomarkers to identify patients at risk of poor clinical outcomes. This study aimed to establish proof of concept for a multidimensional prognostic framework using structural cervical vestibular evoked myogenic potential (cVEMP) and functional modified Clinical Test of Sensory Interaction on Balance (mCTSIB) markers to predict therapeutic success. This prospective cohort study was conducted at a tertiary rehabilitation center between June 2023 and May 2025. Participants were adults with peripheral vestibular disorders, including unilateral vestibular dysfunction, Meniere disease, or superior semicircular canal dehiscence. All participants underwent a customized five-session vestibular rehabilitation protocol. Primary outcomes were subjective clinical success, defined as an 18-point reduction in Dizziness Handicap Inventory (DHI) score, and functional success, defined as a 3-point increase in Dynamic Gait Index score. Among 30 participants (mean age 60.8 years; 77% female), the rehabilitation protocol was associated with significant improvements in mean DHI (53.7 to 37.8; P = .003) and Dynamic Gait Index (19.5 to 22.1; P = .003) scores. While 83% of participants showed raw DHI improvement, only 37% achieved the 18-point minimal clinically important difference. Notably, no participants in the bilateral cVEMP absence group achieved subjective success, compared with 52.6% in the bilateral present group (P trend = .08). Multivariable logistic regression identified baseline DHI severity as an independent predictor of success (odds ratio, 1.05; 95% CI, 1.00-1.10; P = .04). Functional gait success was significantly correlated with baseline vestibular and visual preference ratios. These findings suggest that baseline otolithic structural integrity is a primary determinant of subjective recovery. 
Bilateral structural loss may represent a "structural floor" where meaningful relief is physiologically limited despite functional gains. These results support a precision-based model using structural and sensory biomarkers to tailor rehabilitation.

10
Composite Biofidelity: Addressing Metric Degeneracy in Biomechanical Model Validation and Machine Learning Loss Design

Koshe, A.; Sobhani-Tehrani, E.; Jalaleddini, K.; Motallebzadeh, H.

2026-04-08 bioengineering 10.64898/2026.04.05.716563 medRxiv
Top 0.2%
0.9%

Spectral similarity is often judged with a single metric such as RMSE, yet this can be misleading: physically different errors can produce similar scores. This is a critical limitation for computational biomechanics, where spectral agreement underpins both model validation and machine-learning loss design. Here, we develop a multi-metric framework for objective spectral biofidelity and test whether it better captures meaningful disagreement across complex frequency-domain responses. We evaluated 12 complementary similarity metrics, including CORA and ISO/TS 18571, using controlled spectral perturbations that mimic common real-world deviations such as resonance shifts, localized spikes, and broadband tilts. We then applied the framework to an SBI-tuned finite-element middle-ear model to assess convergence with training dataset size and robustness to measurement noise across repeated stochastic runs. No single metric performed reliably across all distortion types. Shape-based metrics tracked resonance morphology but could miss vertical scaling, whereas MaxError remained important for narrowband anomalies that smoother metrics underweighted. CORA and ISO 18571 did not consistently outperform simpler metrics. Rank aggregation using Borda count provided a robust consensus across metrics, enabling objective identification of training-data saturation and noise thresholds beyond which similarity rankings became unstable. These results show that spectral biofidelity cannot be reduced to a single norm. A multi-metric consensus provides a clearer and more physically meaningful basis for comparing experimental and simulated spectra, and offers a more defensible foundation for data-fidelity terms in physics-informed and simulation-based machine learning.
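Borda-count rank aggregation, which this abstract uses to form a consensus across similarity metrics, can be illustrated in a few lines. The metric rankings below are invented for illustration, not taken from the paper:

```python
import numpy as np

def borda_aggregate(rank_matrix):
    """Aggregate per-metric rankings of candidate spectra via Borda count.

    rank_matrix[i, j] is the rank (0 = best) that metric i assigns to
    candidate j. Lower total Borda score means a better consensus rank.
    """
    return np.asarray(rank_matrix).sum(axis=0)

# Three hypothetical metrics (e.g. RMSE, a shape metric, MaxError) ranking
# four candidate model outputs; values are purely illustrative.
ranks = np.array([[0, 2, 1, 3],
                  [1, 0, 2, 3],
                  [0, 1, 3, 2]])
scores = borda_aggregate(ranks)
consensus = np.argsort(scores)  # candidate order under the consensus
print(scores.tolist(), consensus.tolist())
```

Because the aggregation uses only ordinal information, one metric with a wildly different scale cannot dominate the consensus, which is the robustness property the abstract relies on.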

11
Investigating neural speech processing with functional near infrared spectroscopy: considerations for temporal response functions

Wilroth, J.; Sotero Silva, N.; Tafakkor, A.; de Avo Mesquita, B.; Ip, E. Y. J.; Lau, B. K.; Hannah, J.; Di Liberto, G. M.

2026-03-23 neuroscience 10.64898/2026.03.20.713212 medRxiv
Top 0.2%
0.7%

Functional near infrared spectroscopy (fNIRS) is increasingly used in hearing and communication research, with advantages such as robustness to movement artifacts, improved spatial resolution, and flexibility of contexts in which it can be applied. At the same time, the field is progressively moving towards more continuous, naturalistic listening paradigms, resulting in the widespread adoption of speech tracking analyses such as temporal response functions (TRFs) in electroencephalography (EEG) and magnetoencephalography (MEG) studies. However, it remains unclear whether these analyses can be applied to the slower haemodynamic signals measured by fNIRS. In the present study, we investigated whether a TRF framework can similarly be applied to fNIRS data recorded during continuous speech perception. Eight participants listened to speech simultaneously while fNIRS signals were acquired in a hyperscanning setup. Speech features were regressed onto the haemodynamic responses to test the feasibility and interpretability of fNIRS-based TRFs. Prediction correlations between observed and modelled fNIRS signals across speech features were higher than those typically reported for EEG-TRF studies and comparable to those reported for MEG-TRF studies. Moreover, these correlations did not overlap with a null distribution generated from trial-mismatched fNIRS data, confirming statistical significance, and were slightly greater than those obtained from a conventional GLM approach. Our findings support that the TRF estimation method can yield meaningful and statistically significant responses from fNIRS data. Highlights:
- TRF modelling can be meaningfully applied to fNIRS data acquired during speech listening tasks.
- Prediction correlations between actual and modelled fNIRS signals were above chance level, with values comparable to previous EEG/MEG studies.
- TRFs explained more fNIRS variance than a conventional GLM approach.
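A minimal sketch of TRF estimation of the kind this abstract describes: lagged copies of a stimulus feature are regressed onto the response with ridge regularization. The signals here are synthetic, and circular shifts via np.roll stand in for proper zero-padded lagging:

```python
import numpy as np

def estimate_trf(stimulus, response, lags, alpha=1.0):
    """Estimate a temporal response function by ridge-regularized least
    squares: response(t) ~ sum_k w[k] * stimulus(t - lag_k)."""
    X = np.column_stack([np.roll(stimulus, lag) for lag in lags])
    return np.linalg.solve(X.T @ X + alpha * np.eye(len(lags)),
                           X.T @ response)

rng = np.random.default_rng(1)
n = 2000
stim = rng.normal(size=n)            # stand-in for a speech envelope
true_trf = np.array([0.0, 0.5, 1.0, 0.5, 0.0])
lags = np.arange(5)
resp = sum(k * np.roll(stim, l) for k, l in zip(true_trf, lags))
resp = resp + 0.1 * rng.normal(size=n)   # measurement noise
w = estimate_trf(stim, resp, lags, alpha=1.0)
print(np.round(w, 1))
```

For real fNIRS data the lag window would span several seconds to cover the haemodynamic response, and the ridge parameter would be chosen by cross-validation.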

12
Acoustic Salience Drives Pupillary Dynamics in an Interrupted, Reverberant Task

Figarola, V.; Liang, W.; Luthra, S.; Parker, E.; Winn, M.; Brown, C.; Shinn-Cunningham, B. G.

2026-04-02 neuroscience 10.64898/2026.03.31.715639 medRxiv
Top 0.2%
0.4%

Listeners face many challenges when trying to maintain attention to a target source in everyday settings; for instance, reverberation distorts acoustic cues and interruptions capture attention. However, little is known about how these challenges affect the ability to maintain selective attention. Here, we measured syllable recall accuracy and pupil dilation during a spatial selective attention task that was sometimes disrupted. Participants heard two competing, temporally interleaved syllable streams presented in pseudo-anechoic or reverberant environments. On randomly selected trials, a sudden interruption occurred mid-sequence. Compared to anechoic trials, reverberant performance was worse overall, and the interrupter disrupted performance. In uninterrupted trials, reverberation reduced peak pupil dilation both when it was consistent across all stimuli in a block and when it was randomized trial to trial, suggesting temporal smearing reduced clarity of the scene and the salience of events in the ongoing streams. Pupil dilations in response to interruptions indicated perceptual salience was strong across reverberant and anechoic conditions. Specifically, baseline pupil size before trials did not vary across room conditions, and mixing or blocking of trials (altering stimulus expectations) had no impact on pupillary responses. Together, these findings highlight that stimulus salience drives cognitive load more strongly than does task performance.

13
The RNA editing enzyme ADARB1 is readily detectable in primary auditory neurons and provides a means for automated counting

Fincher, G. C.; Thapa, P.; Gressett, S. C.; Walters, B. J.

2026-03-29 neuroscience 10.64898/2026.03.26.714550 medRxiv
Top 0.2%
0.3%

Spiral ganglion neurons (SGNs) are the primary auditory afferents in the inner ear. These neurons degenerate in response to a number of conditions, including auditory neuropathies, concussions, and aging. Research to assess the extent of degeneration and to test the efficacy of protective or rehabilitative strategies requires quantification of SGNs from tissue sections. However, manual counting of SGNs can be arduous and time-consuming due to dense crowding and the lack of reliable nuclear-specific labels. SGNs receive afferent input via GluA2-containing AMPA receptors. As the Gria2 transcripts that code for GluA2 must undergo RNA editing to ensure calcium impermeability, we hypothesized that SGNs would express high levels of the adenosine deaminase acting on RNA (ADAR) enzyme ADARB1. Here we confirm enriched expression of Adarb1 in SGNs via in situ hybridization and show that anti-ADARB1 antibodies robustly label the nuclei of both type I and type II SGNs in cochlear sections from young and aged mice. Neuronal specificity was confirmed using antibodies against neurofilament heavy chain (NFH), human antigen D (HuD), GATA binding protein 3 (GATA3), and SRY-box 2 (SOX2). A blinded investigator manually counted SGNs via NFH staining, and these counts were compared to automated counts of ADARB1-positive nuclei using the Analyze Particles function in ImageJ. A concordance correlation coefficient and Bland-Altman analysis demonstrated strong agreement between the manual and automated counts. Additionally, immunolabeling of ADARB1 in macaque and human temporal bone sections confirms robust labeling of SGN nuclei, suggesting broad utility of ADARB1 immunolabeling for automated counts of SGNs across species.
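Lin's concordance correlation coefficient, used in this abstract to compare manual and automated counts, has a compact closed form that penalizes both poor correlation and systematic offset between raters. The counts below are hypothetical, for illustration only:

```python
import numpy as np

def concordance_ccc(x, y):
    """Lin's concordance correlation coefficient between two raters.

    CCC = 2*cov(x, y) / (var(x) + var(y) + (mean(x) - mean(y))**2),
    equal to 1 only for perfect agreement on the identity line.
    """
    x, y = np.asarray(x, float), np.asarray(y, float)
    mx, my = x.mean(), y.mean()
    cov = ((x - mx) * (y - my)).mean()
    return 2.0 * cov / (x.var() + y.var() + (mx - my) ** 2)

# Hypothetical manual vs. automated SGN counts per tissue section.
manual    = [120, 98, 143, 110, 87, 131]
automated = [118, 101, 140, 113, 85, 129]
ccc = concordance_ccc(manual, automated)
print(round(ccc, 3))
```

Unlike Pearson's r, the CCC drops if one method is consistently biased high or low, which is why it is paired here with Bland-Altman analysis.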

14
BioDCASE: Using data challenges to make community advances in computational bioacoustics

Stowell, D.; Nolasco, I.; McEwen, B.; Vidana Vila, E.; Jean-Labadye, L.; Benhamadi, Y.; Lostanlen, V.; Dubus, G.; Hoffman, B.; Linhart, P.; Morandi, I.; Cazau, D.; White, E.; White, P.; Miller, B.; Nguyen Hong Duc, P.; Schall, E.; Parcerisas, C.; Gros-Martial, A.; Moummad, I.

2026-04-06 animal behavior and cognition 10.64898/2026.04.02.716062 medRxiv
Top 0.3%
0.2%

Computational bioacoustics has seen significant advances in recent decades. However, the rate of insights from automated analysis of bioacoustic audio lags behind our rate of collecting the data - due to key capacity constraints in data annotation and bioacoustic algorithm development. Gaps in analysis methodology persist: not because they are intractable, but because of resource limitations in the bioacoustics community. To bridge these gaps, we advocate the open science method of data challenges, structured as public contests. We conducted a bioacoustics data challenge named BioDCASE, within the format of an existing event (DCASE). In this work we report on the procedures needed to select and then conduct useful bioacoustics data challenges. We consider aspects of task design such as dataset curation, annotation, and evaluation metrics. We report the three tasks included in BioDCASE 2025 and the resulting progress made. Based on this we make recommendations for open community initiatives in computational bioacoustics.

15
FRMPD4, a causal gene for intellectual disability and epilepsy, is associated with X-linked non-syndromic hearing loss

Liedtke, D.; Rak, K.; Schrode, K. M.; Hehlert, P.; Chamanrou, N.; Bengl, D.; Katana, R.; Heydaran, S.; Doll, J.; Han, M.; Nanda, I.; Senthilan, P. R.; Juergens, L.; Bieniussa, L.; Voelker, J.; Neuner, C.; Hofrichter, M. A.; Schroeder, J.; Schellens, R. T.; de Vrieze, E.; van Wijk, E.; Zechner, U.; Herms, S.; Hoffmann, P.; Mueller, T.; Dittrich, M.; Bartsch, O.; Krawitz, P. M.; Klopocki, E.; Shehata-Dieler, W.; Maroofian, R.; Wang, T.; Worley, P. F.; Goepfert, M. C.; Galehdari, H.; Lauer, A. M.; Haaf, T.; Vona, B.

2026-03-30 genetic and genomic medicine 10.64898/2026.03.27.26349271 medRxiv
Top 0.3%
0.2%

Background: Understanding the phenotypic spectrum of disease-associated genes is essential for accurate diagnosis and targeted therapy. FRMPD4 (FERM and PDZ Domain Containing 4) has previously been associated with intellectual disability and epilepsy. However, its potential role in non-syndromic hearing loss has not been explored. Methods: We performed genetic analysis in two unrelated families presenting with non-syndromic sensorineural hearing loss, identifying maternally inherited missense variants in FRMPD4. Clinical phenotyping included audiological assessment and evaluation for neurodevelopmental involvement. Cross-species expression analyses were conducted in Drosophila, zebrafish, and mouse. Functional characterization included quantitative evaluation of sound-evoked responses in Drosophila nicht gut hoerend (ngh) mutants, assessment of neuronal development and acoustic startle responses in zebrafish loss-of-function models, and morphological cochlear analyses with auditory brainstem response measurements in knockout mice. Results: Three affected males from two unrelated families presented with prelingual, bilaterally symmetrical sensorineural hearing loss, with confirmed congenital onset in one individual and no evidence of neurodevelopmental abnormalities. Cross-species analyses demonstrated evolutionarily conserved expression of FRMPD4 in auditory structures. In Drosophila, quantitative analysis of sound-evoked responses in ngh mutants revealed impaired auditory function. Zebrafish loss-of-function models exhibited reduced neuronal populations in the otic vesicle and posterior lateral line, abnormal neuromast development, and diminished acoustic startle responses. In mice, Frmpd4 knockout resulted in high-frequency hearing loss and cochlear abnormalities consistent with the human phenotype. Conclusions: Our findings expand the phenotypic spectrum of FRMPD4 to include non-syndromic sensorineural hearing loss and establish its evolutionarily conserved role in auditory function. These results have direct implications for genetic diagnosis and variant interpretation in patients with hearing loss.

16
Sparse Stimulus Generation Improves Reverse Correlation Efficiency and Interpretability

Gargano, J. A.; Rice, A.; Chari, D. A.; Parrell, B.; Lammert, A. C.

2026-03-26 neuroscience 10.64898/2026.03.24.714012 medRxiv
Top 0.4%
0.1%

Reverse correlation is a widely used and well-established method for probing latent perceptual representations in which subjects render subjective preference responses to ambiguous stimuli. Stimuli are purposefully designed to have no direct relationship with the target representation (e.g., they are randomly generated), a property which makes each individual stimulus minimally informative toward reconstructing the target, and often difficult for subjects to interpret. As a result, a large number of stimulus-response pairs must be gathered from a given subject in order for reconstructions to be of sufficient quality, making the task fatiguing. Recent work has demonstrated that the number of trials needed can be substantially reduced using a compressive sensing framework that incorporates into the reconstruction process the assumption that the target representation can be sparsely represented in some basis. Here, we introduce an alternative method that incorporates the sparsity assumption directly into stimulus generation, which holds promise not only for improving efficiency but also for improving the interpretability of stimuli from the subjects' perspective. We develop this new method as a mathematical variation of the compressive sensing approach, before conducting one simulation study and two human-subjects experiments to assess the benefits of this method for reconstruction quality, sample-size efficiency, and subjective interpretability. Results show that sparse stimulus generation improves all three of these areas relative to conventional reverse correlation approaches, and also relative to compressive sensing in most conditions.
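The compressive-sensing idea this abstract builds on (exploiting sparsity to reconstruct a target representation from few trials) can be sketched with an L1-penalized regression. For simplicity this toy uses continuous rather than binary preference responses, and all dimensions and values are illustrative assumptions:

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(2)
d, k, n_trials = 64, 3, 40               # stimulus dim, sparsity, trial count
target = np.zeros(d)                     # latent perceptual template (sparse)
support = rng.choice(d, size=k, replace=False)
target[support] = [1.0, -0.8, 0.6]

# Random stimuli as in conventional reverse correlation; sparsity of the
# target lets an L1-penalized fit reconstruct it from far fewer trials
# than the d-dimensional average-of-responses estimator would need.
stimuli = rng.normal(size=(n_trials, d))
responses = stimuli @ target + 0.1 * rng.normal(size=n_trials)

recon = Lasso(alpha=0.05, max_iter=10000).fit(stimuli, responses).coef_
recovered = np.argsort(np.abs(recon))[-k:]
print(sorted(recovered.tolist()), sorted(support.tolist()))
```

The paper's contribution goes a step further by building the sparsity assumption into how the stimuli themselves are generated, rather than only into the reconstruction step shown here.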

17
Transcriptional regulation of the main olfactory epithelium by environmental olfactory exposures

Haran, V.; Chu, C.-Y.; Owens, R. E.; Mariani, T. J.; Meeks, J. P.; Rowe, R. K.

2026-03-26 neuroscience 10.64898/2026.03.24.713727 medRxiv
Top 0.4%
0.1%

The nasal epithelium is a complex tissue composed of both respiratory and olfactory tissue and is constantly exposed to environmental insults, including toxins and pathogens. The main olfactory epithelium (MOE) serves as the critical site for olfaction, the sense of smell. Dysfunction at this barrier tissue can result in partial or total loss of olfactory function, significantly impacting quality of life. The MOE is heterogeneous, comprising many cell types including olfactory sensory neurons (OSNs), support cells, and immune cells. It is not well understood how these diverse cell types interact to regulate the MOE during homeostasis and during times of injury and inflammation. We investigated how environmental olfactory exposures impact cell-type-specific transcriptional responses in the mouse MOE. We performed single-cell RNA sequencing (scRNA-seq) of the MOE following controlled environmental exposure to both well-known odorants and allergens. We identified major cell types and subtypes within the MOE and characterized transcriptional changes in OSNs, sustentacular cells, and resident immune cells in response to each exposure condition. This indicates that environmental olfactory exposures drive changes in multiple cell types in the MOE. To our knowledge, this is the first study to identify effects of environmental olfactory exposures on cell-type-specific transcription at homeostasis. These findings highlight the potential importance of multi-cellular interactions and communication in regulation of the olfactory epithelium.

18
Infra-delta oscillatory structure in expressive piano performance: evidence for a shared motor timing mechanism

Proverbio, A. M.; Qin, C.

2026-03-30 neuroscience 10.64898/2026.03.27.714869 medRxiv
Top 0.5%
0.1%

This study examines the temporal dynamics of expressive piano performance by means of a quantitative analysis of motor timing in an elite pianist, with particular reference to stylistic contrasts between Baroque and Romantic repertoire. In line with kinematic models of expressive timing, which describe musical performance as reflecting principles of biological motion, we examined whether a common temporal structure underlies stylistically divergent executions. Despite marked differences in structural complexity and gesture density, both performances exhibited a shared low-frequency oscillatory pattern (~0.36 Hz) in beat-level timing variability. This infra-delta rhythmic modulation is consistent with the presence of an underlying motor timing scaffold and suggests a common temporal organization across expressive behaviors. These findings support the hypothesis that musical performance relies on a rhythmically structured control architecture, potentially shared with other complex motor activities such as speech and locomotion.
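A slow modulation like the ~0.36 Hz component described above can in principle be located with a simple periodogram of the beat-level timing series. The sketch below is purely illustrative and not the authors' analysis pipeline: the inter-onset-interval values, sampling rate, modulation depth, and noise level are all invented for the demo.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic beat-level inter-onset-interval (IOI) series, sampled at an
# assumed 2 Hz beat rate (values are illustrative, not from the study).
fs = 2.0
t = np.arange(0, 120, 1 / fs)            # two minutes of beats
f_mod = 0.36                             # infra-delta modulation frequency
ioi = 0.5 + 0.05 * np.sin(2 * np.pi * f_mod * t) \
          + 0.01 * rng.standard_normal(t.size)

# Periodogram of the demeaned series; the peak should sit near f_mod.
x = ioi - ioi.mean()
spec = np.abs(np.fft.rfft(x)) ** 2
freqs = np.fft.rfftfreq(x.size, d=1 / fs)
peak = freqs[np.argmax(spec)]
print(f"dominant modulation frequency: {peak:.2f} Hz")
```

With a 120 s record the frequency resolution is fs/N ≈ 0.008 Hz, so a genuine 0.36 Hz modulation lands within one bin of its true value.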

19
Rites of Passage: Professional Identity Formation and the OTOHNS Oral Board Exam

McMains, K.

2026-03-19 otolaryngology 10.64898/2026.03.19.26347858 medRxiv
Top 0.5%
0.0%

Objectives: Professional Identity Formation (PIF) has been defined as an individual internalizing the values and norms of the medical profession in ways that result in thinking, acting, and feeling like a physician. During the COVID-19 pandemic, the ABOHNS pivoted the format of the oral board exam (OBE) from in-person to virtually administered exams. In light of this, we ask: (1) How, if at all, do Otolaryngology-Head and Neck Surgery Oral Board Examinations shape examinee professional identity? (2) Do different formats of administering these examinations have different effects on examinee PIF?

Methods: Thematic analysis was used to explore candidate experience. We developed and tested a shortened Professional Identity Essay that foregrounds the PIF effects resulting from differing methods of administering the OBE. Themes generated from semi-structured interviews were compared to identify differences in Professional Identity resulting from OBEs.

Results: Nineteen participants enrolled in our study, each completing a single interview lasting 15-30 minutes. Participants' responses coalesced around three themes: the educational effect of the OBE on PIF; the distinct stresses carried by different OBE formats; and the catalytic effect on PIF of the in-person OBE.

Conclusions: Participating in either format of the ABOHNS OBE demonstrated an educational effect on PIF. Additionally, when delivered in person, the ABOHNS OBE also catalyzed ongoing PIF. This effect of the OBE offers an additional potent mechanism to integrate the most inclusive range of candidates into the community of Otolaryngology practice.

Level of Evidence: VI (single qualitative study investigating perspectives of healthcare providers on a specific intervention)

20
Neuronal Dynamics During Isoflurane Induction in Caenorhabditis elegans

White, H.; Bosinski, C.; Gabel, C. V.; Connor, C.

2026-04-02 neuroscience 10.64898/2026.03.31.715586 medRxiv
Top 0.5%
0.0%

Background: How does neuronal activity change as an animal transitions from wakefulness to a state of general anesthesia? Previous studies used C. elegans to investigate awake and anesthetized states and emergence from anesthesia, and to establish metrics characterizing how system-wide neuronal dynamics differ under these conditions. This study employs a new technique to image pan-neuronal activity in C. elegans continuously during induction of anesthesia with isoflurane.

Methods: C. elegans worms expressing pan-neuronal nuclear RFP and cytosolic GCaMP6s were imaged with light sheet microscopy to measure single-cell activity in the majority of neurons in the animal's head during induction via isoflurane exposure. Stable concentrations of isoflurane were maintained throughout the experiment by measured flow vaporization of isoflurane into a specially designed gas enclosure compatible with the imaging system. Building on our previous work investigating emergence from anesthesia, we analyzed ensemble neuronal activity, spectrograms of frequency over time, and metrics of information flow between neurons.

Results: Induction of isoflurane anesthesia caused a progressive reduction in neuronal activity over the course of 40 minutes. Spectrograms indicated a loss of bulk signal power across all frequencies, notably including low frequencies. State Decoupling and Internal Predictability were among the most useful metrics for discriminating the anesthetized state, demonstrating induction kinetics that are the inverse of emergence. However, animals did not all arrive at the anesthetized state at the same time; response times were highly individualized.

Conclusions: Information metrics of neurodynamic activity demonstrate that isoflurane induction results in a gradual increase in neuronal disconnection and disorganization. Thus, at the level of individual neuron connectivity and system dynamics, the induction of anesthesia in C. elegans nematodes is in essence the reverse of emergence. Induction, however, occurs more rapidly and shows marked variability between individuals. Future genetic studies will show which molecular targets define sensitivity to volatile anesthetics like isoflurane.

Summary Statement: Isoflurane-induced unconsciousness is a common phenomenon across species. Does the induction of anesthesia arise through distinct state transitions, or through gradual changes in system dynamics when activity is measured at the level of individual neurons?
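The spectrogram analysis the abstract describes, which shows broadband power declining during induction, can be approximated with a coarse windowed-FFT power track. The sketch below is a toy illustration on a synthetic decaying trace, not the study's pipeline; the imaging rate, decay constant, and noise model are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic calcium trace whose amplitude decays during "induction"
# (purely illustrative; not data from the study).
fs = 4.0                                  # assumed imaging rate, Hz
t = np.arange(0, 2400, 1 / fs)            # 40 minutes
envelope = np.exp(-t / 1200)              # progressive loss of activity
trace = envelope * rng.standard_normal(t.size)

# Coarse spectrogram: total spectral power in consecutive 60 s windows.
win = int(60 * fs)
n_win = t.size // win
power = np.array([
    np.mean(np.abs(np.fft.rfft(trace[i * win:(i + 1) * win])) ** 2)
    for i in range(n_win)
])
print(f"first-window power {power[0]:.1f}, last-window power {power[-1]:.1f}")
```

A monotone envelope like this produces a steadily falling power track across all frequency bins at once, which is the broadband signature described in the Results.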